10 research outputs found

    Certifying floating-point implementations using Gappa

    Full text link
    High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purposes). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Moreover, it can be used to incrementally prove complex mathematical properties pertaining to the C code, yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Gappa can also generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, providing even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.

    Basic building blocks for a triple-double intermediate format

    Get PDF
    The implementation of correctly rounded elementary functions needs high intermediate accuracy before final rounding. This accuracy can be provided by (pseudo-)expansions of size three, i.e. a triple-double format. The report presents all basic operators for such a format. Since triple-double numbers can be redundant, a renormalization procedure is presented and proven. Implementations of elementary functions need addition and multiplication sequences whose operands may be in double, double-double, or triple-double format, with results accordingly in one of these formats. Several such procedures are presented, with proofs of their accuracy bounds. Intermediate triple-double results must finally be correctly rounded to double precision. Two efficient rounding sequences are presented, one for round-to-nearest mode and one for the directed rounding modes; their complete proofs constitute half of the report.

    Optimizing polynomials for floating-point implementation

    Get PDF
    The floating-point implementation of a function on an interval often reduces to polynomial approximation, the polynomial being typically provided by the Remez algorithm. However, the floating-point evaluation of a Remez polynomial sometimes leads to catastrophic cancellations. This happens when some of the polynomial coefficients are very small in magnitude with respect to the others. In this case, it is better to force these coefficients to zero, which also reduces the operation count. This technique, classically used for odd or even functions, may be generalized to a much larger class of functions. An algorithm is presented that forces the smaller coefficients of the initial polynomial to zero, thanks to a modified Remez algorithm targeting an incomplete monomial basis. One advantage of this technique is that it is purely numerical: the function is used as a numerical black box. The algorithm is implemented within a larger polynomial implementation tool, which is demonstrated on a range of examples, resulting in polynomials with fewer coefficients than those obtained the usual way.

    A correctly rounded implementation of the exponential function on the Intel Itanium architecture

    Get PDF
    This article presents an efficient implementation of a correctly rounded exponential function in double precision on the Intel Itanium processor family. This work combines advanced processor features (like the double-extended precision fused multiply-and-add units of the Itanium processors) with recent research results giving the worst-case precision needed for correctly rounding the exponential function. We give and prove an algorithm which returns a correctly rounded result (in any of the four IEEE-754 rounding modes) within 172 machine cycles on the Itanium 2 processor. This is about four times slower than the less accurate function present in the standard Intel mathematical library. The evaluation is performed in one phase only and is therefore fast even in the worst case, contrary to other implementations which use a multilevel strategy: we show that the worst-case required precision of 157 bits can always be stored in the sum of two double-extended floating-point numbers. Another algorithm is given with a 92-cycle execution time, but its proof has yet to be formally completed.

    A correctly rounded implementation of the exponential function . . .

    No full text
    This article presents an efficient implementation of a correctly rounded exponential function in double precision on the Intel Itanium processor family. This work combines advanced processor features (like the double-extended precision fused multiply-and-add units of the Itanium processors) with recent research results giving the worst-case precision needed for correctly rounding the exponential function. We give and prove an algorithm which returns a correctly rounded result (in any of the four IEEE-754 rounding modes) within 172 machine cycles on the Intel Itanium 2 processor. This is about four times slower than the less accurate function present in the standard Intel mathematical library. The evaluation is performed in one phase only and is therefore fast even in the worst case, contrary to other implementations which use a multilevel strategy [18, 6]: we show that the worst-case required precision of 157 bits can always be stored in the sum of two double-extended floating-point numbers. Another algorithm is given with a 92-cycle execution time, but its proof has yet to be formally completed.

    An efficient rounding boundary test for pow(x,y) in double precision

    No full text
    The correct rounding of the function pow: (x,y) → x^y is currently based on Ziv's iterative approximation process. In order to ensure its termination, cases when x^y falls on a rounding boundary must be filtered out. Such rounding boundaries are floating-point numbers and midpoints between two consecutive floating-point numbers. Detecting rounding boundaries for pow is a difficult problem: previous approaches use repeated square root extraction followed by repeated square and multiply. This article presents a new rounding boundary test for pow in double precision which reduces this detection to a few comparisons with pre-computed constants. These constants are deduced from worst cases for the Table Maker's Dilemma, searched over a small subset of the input domain; this is a novel use of such worst-case bounds. The resulting algorithm has been designed for a fast-on-average correctly rounded implementation of pow, considering the scarcity of rounding boundary cases: it does not stall average computations for rounding boundary detection. The article includes its correctness proof and experimental results.

    A certified infinite norm for the implementation of elementary functions

    No full text
    The high-quality floating-point implementation of useful functions f : R → R, such as exp, sin, or erf, requires bounding the relative error Δ = (p−f)/f of an approximation p with regard to the function f. This involves bounding the infinite norm ‖Δ‖∞ of the error function, whose value must not be underestimated if the implementation is to be safe. Previous approaches for computing the infinite norm are shown to be either unsafe, not sufficiently tight, or too tedious in manual work. We present a safe and self-validating algorithm for automatically upper- and lower-bounding infinite norms of error functions. The algorithm is based on enhanced interval arithmetic and can overcome high cancellation and high condition number around points where the error function is defined only by continuous extension. The algorithm is implemented in a software tool that can generate a proof of correctness for each instance on which it is run.